16 research outputs found

    Automated Segmentation of Large 3D Images of Nervous Systems Using a Higher-order Graphical Model

    This thesis presents a new mathematical model for segmenting volume images. The model is an energy function defined on the space of all choices to remove or preserve the splitting faces of an initial over-segmentation of the 3D image into supervoxels. It decomposes into potential functions that are learned automatically from a small amount of empirical training data. The learning is based on features of the distribution of gray values in the volume image and on features of the geometry and topology of the supervoxel segmentation. To extract these features from large 3D images consisting of several billion voxels, a new algorithm is presented that constructs a suitable representation of the geometry and topology of volume segmentations block-wise, in log-linear runtime (in the number of voxels) and in parallel, using only a prescribed amount of memory. At the core of this thesis is the optimization problem of finding, for a learned energy function, a segmentation with minimal energy. This problem is difficult because the energy function contains 3rd- and 4th-order potential functions that are not submodular. Sufficiently small problems with 10,000 degrees of freedom can be solved to global optimality using Mixed Integer Linear Programming. For larger models with 10,000,000 degrees of freedom, an approximate optimizer is proposed and compared to state-of-the-art alternatives. Using these new techniques and a unified data structure for multi-variate data and functions, a complete processing chain for segmenting large volume images, from the restoration of the raw volume image to the visualization of the final segmentation, has been implemented in C++.
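    The kind of optimization the abstract describes can be sketched on a toy instance: binary variables deciding whether each splitting face is kept (1) or removed (0), with a unary potential per face and one higher-order potential coupling faces that meet at a junction. All numbers, sizes, and potentials below are invented for illustration; exhaustive search stands in for the MILP solver, which is only viable because the state space is tiny.

    ```python
    from itertools import product

    # Hypothetical toy instance: 5 splitting faces, each kept (1) or removed (0).
    unary = [0.2, -0.5, 0.1, -0.3, 0.4]   # learned cost of keeping each face
    # One 3rd-order potential on faces 0, 1, 2: penalize junctions where
    # exactly one of the three faces is kept (a geometrically implausible case).
    triple = {(0, 1, 2): lambda a, b, c: 1.0 if a + b + c == 1 else 0.0}

    def energy(x):
        """Total energy of a face configuration x (tuple of 0/1)."""
        e = sum(u * xi for u, xi in zip(unary, x))
        for (i, j, k), f in triple.items():
            e += f(x[i], x[j], x[k])
        return e

    # Exhaustive search over all 2^5 configurations returns the global optimum,
    # mimicking what the MILP solver guarantees for small problems.
    best = min(product((0, 1), repeat=len(unary)), key=energy)
    ```

    Note how the 3rd-order term changes the answer: without it, faces 1 and 3 alone would be kept; with it, keeping face 2 as well becomes cheaper than paying the junction penalty.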
Results are shown for an application in neuroscience, namely the segmentation of a part of the inner plexiform layer of rabbit retina in a volume image of 2048 x 1792 x 2048 voxels that was acquired by means of Serial Block Face Scanning Electron Microscopy (Denk and Horstmann, 2004) with a resolution of 22nm x 22nm x 30nm. The quality of the automated segmentation, as well as the improvement over a simpler model that does not take geometric context into account, is confirmed by a quantitative comparison with the gold standard.


    No full text
    made possible in recent years. Algorithms for quantifying the spatial arrangement of chromosomes and genes have already been developed, but most of these methods are based on two-dimensional (2D) image data. In order to process three-dimensional (3D) confocal image data, it is necessary to develop new algorithms that operate on 3D data sets. This thesis presents new methods for the description, analysis, and visualization of the 3D distribution of chromosomes and genes in fixed cell nuclei in 3D image data, developed based on concepts of object-oriented programming in
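    One simple 3D arrangement measure in the spirit of this abstract is the radial position of each signal relative to the nucleus centre (0 = centre, 1 = surface of a bounding sphere). This is an illustrative sketch only, not the thesis's algorithms; coordinates and the bounding radius are made up.

    ```python
    import math

    def radial_positions(points, centre, radius):
        """Normalized radial position of each 3D point within a bounding sphere."""
        return [math.dist(p, centre) / radius for p in points]

    genes = [(1.0, 0.0, 0.0), (0.0, 3.0, 4.0)]   # toy 3D coordinates (µm)
    r = radial_positions(genes, centre=(0.0, 0.0, 0.0), radius=10.0)
    # r == [0.1, 0.5]
    ```

    Comparing the distribution of such radii between conditions is one standard way to quantify whether genes prefer the nuclear interior or periphery.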

    Automated defect detection and evaluation in X-ray CT images (oral examination: 12/18/2002)

    No full text
    X-ray computed tomography is increasingly used in industrial quality control. However, employing it in fully automated production is a major challenge, since this requires, among other things, robust volume image processing on noisy images. This thesis presents methods for the automatic detection and geometric description of defects. The detection
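    The abstract does not spell out the detection methods, so the following is only a generic sketch of the task: flag voxels whose gray value deviates strongly from the expected material value, then group flagged voxels into defect candidates via 6-connected component labelling. All parameters and the toy volume are hypothetical.

    ```python
    from collections import deque

    def find_defects(volume, material=100, tol=20):
        """Group strongly deviating voxels into 6-connected defect candidates."""
        nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
        flagged = {(z, y, x)
                   for z in range(nz) for y in range(ny) for x in range(nx)
                   if abs(volume[z][y][x] - material) > tol}
        defects, seen = [], set()
        for start in flagged:
            if start in seen:
                continue
            comp, queue = [], deque([start])
            seen.add(start)
            while queue:                       # BFS over the 6-neighbourhood
                z, y, x = queue.popleft()
                comp.append((z, y, x))
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                   (0,-1,0), (0,0,1), (0,0,-1)):
                    n = (z + dz, y + dy, x + dx)
                    if n in flagged and n not in seen:
                        seen.add(n)
                        queue.append(n)
            defects.append(comp)
        return defects

    # Toy 2x2x2 volume of material value 100 with one low-density voxel (a pore).
    volume = [[[100, 100], [100, 100]],
              [[100, 10], [100, 100]]]
    defects = find_defects(volume)
    ```

    A real pipeline would of course need noise-robust preprocessing before any thresholding, which is precisely the challenge the abstract emphasizes.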

    Bayesian Estimation for White Light Interferometry

    No full text
    In this thesis, a new approach for the reconstruction of height maps from scanning white light interferometry is presented. This method unifies the conventional steps of pre- and postprocessing within Bayesian inference. A carefully chosen formulation of the prior allows for the exact computation of the height estimate, obviating the need for stochastic sampling or simulation methods. In conventional surface estimation for white light interferometry, a primary height map is calculated pixel-wise from the raw data, followed by a postprocessing step in which outliers and other measurement artifacts are removed. Established and novel algorithms for both steps are discussed. The techniques of Bayesian inference for 2-D image processing, on which the novel surface estimation approach is based, are presented afterwards. In this new method, the localization of the fringe pattern is represented by the likelihood function, while the knowledge about the general surface properties goes into the prior probability of local height configurations. Both the 3-D data set and this prior are considered simultaneously in the estimation procedure, which analytically yields the optimum surface reconstruction as a mode of the marginal posterior probability. A method for the quantitative comparison of height maps is developed and used to assess the performance of different postprocessing algorithms.
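    The idea of computing an exact MAP estimate without sampling can be illustrated on a 1-D analogue (hypothetical, not the thesis's actual prior or likelihood): each pixel carries a data cost per candidate height (a stand-in for the negative log-likelihood of the fringe localization), a pairwise prior penalizes height jumps between neighbours, and dynamic programming returns the exact mode of the posterior.

    ```python
    def map_profile(data_cost, smooth=1.0):
        """Exact MAP height profile: data_cost[i][h] is the cost of height h at
        pixel i; neighbouring pixels pay smooth * |h_i - h_{i+1}| (a toy prior)."""
        n, k = len(data_cost), len(data_cost[0])
        cost = list(data_cost[0])
        back = []
        for i in range(1, n):                 # forward pass (Viterbi recursion)
            prev, cost, bp = cost, [], []
            for h in range(k):
                best = min(range(k), key=lambda p: prev[p] + smooth * abs(p - h))
                cost.append(data_cost[i][h] + prev[best] + smooth * abs(best - h))
                bp.append(best)
            back.append(bp)
        h = min(range(k), key=cost.__getitem__)
        profile = [h]                         # backward pass recovers the argmin
        for bp in reversed(back):
            h = bp[h]
            profile.append(h)
        return profile[::-1]

    # Pixel 1 weakly favours an outlier height (3); the smoothness prior
    # overrides it, so pre- and postprocessing happen in one exact estimate.
    costs = [[0, 5, 5, 5], [2, 2, 2, 0], [0, 5, 5, 5]]
    ```

    With `smooth=1.0` the profile stays flat at height 0; with `smooth=0.0` (no prior) the middle pixel jumps to the outlier height, which is exactly the artifact that conventional pipelines would have to clean up afterwards.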